Defending against Backdoors in Federated Learning with Robust Learning Rate
Authors
Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel
Abstract
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to its decentralized and unvetted nature. One important line of such attacks is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of the agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on this conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that the backdoor is either completely eliminated, or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence over the accuracy of the trained models. In addition, we also provide a convergence rate analysis for our proposed scheme.
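To make the sign-based adjustment concrete, here is a minimal NumPy sketch of the aggregation step the abstract describes. The function name robust_lr_aggregate and the parameters eta (learning-rate magnitude) and theta (sign-agreement threshold) are illustrative assumptions, not identifiers from the paper's code.

import numpy as np

def robust_lr_aggregate(updates, eta=1.0, theta=4):
    # updates: list of flattened per-agent update vectors (1-D numpy arrays)
    # eta:     magnitude of the server's learning rate (assumed name)
    # theta:   minimum sign agreement needed to keep the rate positive (assumed name)
    updates = np.stack(updates)                       # shape: (num_agents, dim)
    agreement = np.abs(np.sign(updates).sum(axis=0))  # per-dimension sign agreement
    # Per dimension and per round: keep +eta where enough agents agree on the
    # update's direction; flip to -eta where they do not.
    lr = np.where(agreement >= theta, eta, -eta)
    return lr * updates.mean(axis=0)                  # adjusted aggregate update

# Usage sketch: new_global = old_global + robust_lr_aggregate([u1, u2, u3], theta=2)

Flipping the learning rate in low-agreement dimensions pushes the model away from directions that only a minority of (potentially adversarial) agents favor, which matches the intuition the abstract gives for the defense.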
Similar resources
Defending Non-Bayesian Learning against Adversarial Attacks
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state. We focus on the impact of the adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local le...
Auror: defending against poisoning attacks in collaborative deep learning systems
Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These attacks are known for machine learning systems...
Defending Network-Centric Systems using Backdoors
As computing systems are increasingly depending on networking, they are also becoming more vulnerable to networking malfunctioning or misuse. Human intervention is not a solution when computer system monitoring and repairing must be done fast and reliably regardless of scale, networking availability, or system impairing. Future network-centric systems must be built around a defensive architectu...
Defending Against Speculative Attacks: Reputation, Learning, and Coordination
How does the central bank’s incentive to build a reputation affect speculators’ ability to coordinate and the likelihood of the devaluation outcome during speculative currency crises? What role does market information play in speculators’ coordination and the central bank’s reputation building? I address these questions in a dynamic regime change game that highlights the interaction between the...
Backdoors in the Context of Learning
The concept of backdoor variables has been introduced as a structural property of combinatorial problems that provides insight into the surprising ability of modern satisfiability (SAT) solvers to tackle extremely large instances. This concept is, however, oblivious to “learning” during search—a key feature of successful combinatorial reasoning engines for SAT, mixed integer programming (MIP), ...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i10.17118